
    Dynamics of quantum adiabatic evolution algorithm for Number Partitioning

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics onto those of an auxiliary quantum spin-glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^{-n/2}), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation on the power of quantum adiabatic computation in random optimization problems. Analytical results are in agreement with numerical simulation of the algorithm.
    Comment: 32 pages, 5 figures, 3 appendices. Additions compared to v3: (i) numerical solution of the stationary Schrödinger equation for the adiabatic eigenstates and eigenvalues; (ii) connection between the scaling law of the minimum gap with the problem size and the shape of the coarse-grained distribution of the adiabatic eigenvalues at the avoided-crossing point.
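    The quoted gap scaling can be made concrete with a short numerical illustration. The sketch below (with an arbitrary unit prefactor; only the scaling with n is meaningful) shows how quickly g_min = O(n 2^{-n/2}) closes, and why the standard adiabatic runtime estimate T ~ 1/g_min^2 implies exponential cost.

```python
# Illustration of the minimum-gap scaling g_min = O(n * 2^(-n/2)) quoted
# above, and the adiabatic runtime estimate T ~ 1/g_min^2 it implies.
# The prefactor is arbitrary (set to 1); only the scaling is meaningful.

def g_min(n: int) -> float:
    """Minimum excitation gap, up to an unknown constant prefactor."""
    return n * 2.0 ** (-n / 2)

for n in (10, 20, 30, 40):
    gap = g_min(n)
    print(f"n={n:3d}  gap ~ {gap:.3e}  runtime ~ 1/gap^2 ~ {gap ** -2:.3e}")
```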

    A Model-Based Probabilistic Inversion Framework for Wire Fault Detection Using TDR

    Time-domain reflectometry (TDR) is one of the standard methods for diagnosing faults in electrical wiring and interconnect systems, with a long-standing history focused mainly on hardware development of both high-fidelity systems for laboratory use and portable hand-held devices for field deployment. While these devices can easily assess the distance to hard faults such as sustained opens or shorts, their ability to assess subtle but important degradation such as chafing remains an open question. This paper presents a unified framework for TDR-based chafing fault detection in lossy coaxial cables by combining an S-parameter-based forward modeling approach with a probabilistic (Bayesian) inference algorithm. Results are presented for the estimation of nominal and faulty cable parameters from laboratory data.
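    The inversion idea can be sketched in a few lines. The toy model below stands in for the paper's S-parameter forward model: a chafe at distance d is assumed to produce a small Gaussian reflection bump in the TDR trace, and a random-walk Metropolis sampler recovers the posterior over d. All names, constants, and the bump model itself are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of Bayesian inversion for TDR fault location, using a
# toy forward model (Gaussian reflection bump at the fault distance) in
# place of the paper's S-parameter cable model. Purely illustrative.
import math
import random

random.seed(0)

def forward(d, x_grid, amp=0.05, width=0.5):
    """Toy TDR trace: reflection bump centered at fault distance d (m)."""
    return [amp * math.exp(-((x - d) ** 2) / (2 * width ** 2)) for x in x_grid]

x_grid = [0.1 * i for i in range(200)]          # 0..20 m
true_d = 12.3
noise = 0.005
data = [y + random.gauss(0, noise) for y in forward(true_d, x_grid)]

def log_post(d):
    """Gaussian likelihood plus a flat prior on [0, 20] m."""
    if not 0.0 <= d <= 20.0:
        return -math.inf
    model = forward(d, x_grid)
    return -sum((m - y) ** 2 for m, y in zip(model, data)) / (2 * noise ** 2)

# Random-walk Metropolis over the fault distance.
d, lp = 12.0, log_post(12.0)
samples = []
for _ in range(3000):
    prop = d + random.gauss(0, 0.1)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        d, lp = prop, lp_prop
    samples.append(d)

post = samples[1000:]                            # discard burn-in
est = sum(post) / len(post)
print(f"posterior mean fault distance ~ {est:.2f} m (true {true_d} m)")
```

    The same structure (forward model + likelihood + sampler) carries over when the toy bump is replaced by a physics-based cable model; only `forward` changes.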

    Shielded-Twisted-Pair Cable Model for Chafe Fault Detection via Time-Domain Reflectometry

    This report details the development, verification, and validation of an innovative physics-based model of electrical signal propagation through shielded-twisted-pair cable, which is commonly found on aircraft and offers an ideal proving ground for detection of small holes in a shield well before catastrophic damage occurs. The accuracy of this model is verified through numerical electromagnetic simulations using a commercially available software tool. The model is shown to be representative of more realistic (analytically intractable) cable configurations as well. A probabilistic framework is developed for validating the model accuracy with reflectometry data obtained from real aircraft-grade cables chafed in the laboratory.

    On Resolution of the Selectivity/Conductivity Paradox for the Potassium Ion Channel

    The ability of the potassium channel to conduct K+ at almost the rate of free diffusion, while discriminating strongly against the (smaller) Na+ ion, is of enormous biological importance [1]. Yet its function remains at the center of a “many-voiced debate” [2,3]. In this presentation, a first-principles explanation is provided for the seemingly paradoxical coexistence of high conductivity with high selectivity between monovalent ions within the channel. It is shown that the conductivity of the selectivity filter is described by the generalized Einstein relation. A novel analytic approach to the analysis of the conductivity is proposed, based on the derivation of an effective grand canonical ensemble for ions within the filter. The conditions for barrier-less diffusion-limited conduction through the KcsA filter are introduced, and the relationships between system parameters required to satisfy these conditions are derived. It is shown that the Eisenman selectivity equation is one of these, and that it follows directly from the condition for barrier-less conduction. The proposed theory provides analytical insight into the “knock-on” [1] and Coulomb blockade [4] mechanisms of K+ conduction through the KcsA filter. It confirms and illuminates an earlier argument [3] that the “snug-fit” model cannot describe the fast diffusion-limited conduction seen in experiments. Numerical examples are provided illustrating agreement of the theory with experimentally-measured I-V curves. The results are not restricted to biological systems, but also carry implications for the design of artificial nanopores.
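    For background, the textbook (non-generalized) form of the Einstein relation that the abstract builds on is given below; the linear-response conductance expression that follows from it is an illustrative standard form, not the paper's generalized result, which extends it to strongly confined, interacting ions in the filter.

```latex
% Textbook Einstein relation between diffusivity D and mobility mu for a
% carrier of charge q at temperature T (background only):
\[
  \frac{D}{\mu} = \frac{k_{\mathrm B} T}{q}.
\]
% In linear response this gives a filter conductance scaling of the form
% G \propto q^2 D \langle n \rangle / (k_{\mathrm B} T), with
% \langle n \rangle the mean ion occupancy -- an illustrative standard
% form, not the generalized relation derived in the presentation.
```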

    Development of an On-board Failure Diagnostics and Prognostics System for Solid Rocket Booster

    We develop a case breach model for the on-board fault diagnostics and prognostics system for subscale solid-rocket boosters (SRBs). The model development was motivated by recent ground firing tests, in which a deviation of measured time-traces from the predicted time-series was observed. A modified model takes into account the nozzle ablation, including the effect of roughness of the nozzle surface, the geometry of the fault, and erosion and burning of the walls of the hole in the metal case. The derived low-dimensional performance model (LDPM) of the fault can reproduce the observed time-series data very well. To verify the performance of the LDPM we build a FLUENT model of the case breach fault and demonstrate good agreement between theoretical predictions based on the analytical solution of the model equations and the results of the FLUENT simulations. We then incorporate the derived LDPM into an inferential Bayesian framework and verify performance of the Bayesian algorithm for the diagnostics and prognostics of the case breach fault. It is shown that the obtained LDPM allows one to track parameters of the SRB in real time during flight, to diagnose the case breach fault, and to predict its evolution. The application of the method to fault diagnostics and prognostics (FD&P) of other SRB fault modes is discussed.

    Simulation of Guided-Wave Ultrasound Propagation in Composite Laminates: Benchmark Comparisons of Numerical Codes and Experiment

    Ultrasonic wave methods constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials, such as carbon fiber reinforced polymer (CFRP) laminates. Computational models of ultrasonic wave excitation, propagation, and scattering in CFRP composites can be extremely valuable in designing practicable NDE and SHM hardware, software, and methodologies that accomplish the desired accuracy, reliability, efficiency, and coverage. The development and application of ultrasonic simulation approaches for composite materials is an active area of research in the field of NDE. This paper presents comparisons of guided wave simulations for CFRP composites implemented using four different simulation codes: the commercial finite element modeling (FEM) packages ABAQUS, ANSYS, and COMSOL, and a custom code executing the Elastodynamic Finite Integration Technique (EFIT). Benchmark comparisons are made between the simulation tools and both experimental laser Doppler vibrometry data and theoretical dispersion curves. A pristine case and a delamination-type case (Teflon insert in the experimental specimen) are studied. A summary is given of the accuracy of simulation results and the respective computational performance of the four different simulation tools.

    A New Computational Framework for Atmospheric and Surface Remote Sensing

    A Bayesian data-analysis framework is described for atmospheric and surface retrievals from remotely-sensed hyperspectral data. Some computational techniques are highlighted for improved accuracy in the forward physics model.

    Holographic Optical Data Storage

    Although the basic idea may be traced back to the earlier X-ray diffraction studies of Sir W. L. Bragg, the holographic method as we know it was invented by D. Gabor in 1948 as a two-step lensless imaging technique to enhance the resolution of electron microscopy, for which he received the 1971 Nobel Prize in physics. The distinctive feature of holography is the recording of the object phase variations that carry the depth information, which is lost in conventional photography where only the intensity (= squared amplitude) distribution of an object is captured. Since all photosensitive media necessarily respond to the intensity incident upon them, an ingenious way had to be found to convert object phase into intensity variations, and Gabor achieved this by introducing a coherent reference wave along with the object wave during exposure. Gabor's in-line recording scheme, however, required the object in question to be largely transmissive, and could provide only marginal image quality due to unwanted terms simultaneously reconstructed along with the desired wavefront. Further handicapped by the lack of a strong coherent light source, optical holography thus seemed fated to remain just another scientific curiosity, until the field was revolutionized in the early 1960s by some major breakthroughs: the proposition and demonstration of the laser principle, the introduction of off-axis holography, and the invention of volume holography. Consequently, the remainder of that decade saw an exponential growth in research on theory, practice, and applications of holography. 
Today, holography not only boasts a wide variety of scientific and technical applications (e.g., holographic interferometry for strain, vibration, and flow analysis, microscopy and high-resolution imagery, imaging through distorting media, optical interconnects, holographic optical elements, optical neural networks, three-dimensional displays, data storage, etc.), but has become a prominent art, advertising, and security medium as well. The evolution of holographic optical memories has followed a path not altogether different from holography itself, with several cycles of alternating interest over the past four decades. P. J. van Heerden is widely credited for being the first to elucidate the principles behind holographic data storage in a 1963 paper, predicting bit storage densities on the order of 1/λ³ with source wavelength λ, a fantastic capacity of nearly 1 TB/cm³ for visible light! The science and engineering of such a storage paradigm was heavily pursued thereafter, resulting in many novel hologram multiplexing techniques for dense data storage, as well as important advances in holographic recording materials. Ultimately, however, the lack of such enabling technologies as compact laser sources and high-performance optical data I/O devices dampened the hopes for the development of a commercial product. After a period of relative dormancy, successful applications of holography in other arenas sparked a renewed interest in holographic data storage in the late 1980s and the early 1990s. Currently, with most of the critical optoelectronic device technologies in place and the quest for an ideal holographic recording medium intensified, holography is once again considered as one of several future data storage paradigms that may answer our constantly growing need for higher-capacity and faster-access memories.
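    Van Heerden's 1/λ³ estimate can be checked in a few lines; for a representative visible wavelength of 500 nm (an illustrative value, not one from the abstract), the density indeed comes out at about one terabyte per cubic centimeter.

```python
# Quick check of van Heerden's back-of-the-envelope density estimate:
# 1/lambda^3 bits for visible light works out to roughly 1 TB/cm^3.
# The 500 nm wavelength is a representative illustrative choice.
wavelength_cm = 500e-7                       # 500 nm expressed in cm
bits_per_cm3 = (1 / wavelength_cm) ** 3      # ~8e12 bits per cm^3
bytes_per_cm3 = bits_per_cm3 / 8             # ~1e12 bytes = ~1 TB
print(f"{bits_per_cm3:.1e} bits/cm^3 ~ {bytes_per_cm3 / 1e12:.1f} TB/cm^3")
```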

    Steady-State Characterization of Bacteriorhodopsin-D85N Photocycle

    An operational characterization of the photocycle of the genetic mutant D85N of bacteriorhodopsin, BR-D85N, is presented. Steady-state bleach spectra and pump-probe absorbance data are obtained with thick hydrated films containing BR-D85N embedded in a gelatin host. Simple two- and three-state models are used to analyze the photocycle dynamics and extract relevant information such as pure-state absorption spectra, photochemical-transition quantum efficiencies, and thermal lifetimes of dominant states appearing in the photocycle, the knowledge of which should aid in the analysis of optical recording and retrieval of data in films incorporating this photochromic material. The remarkable characteristics of this material and their implications from the viewpoint of optical data storage and processing are discussed.
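    A two-state model of the kind mentioned above can be sketched directly. In this toy version (states, rates, and numbers are illustrative assumptions, not fitted values from the paper) the ground state B is pumped to a long-lived state M at rate p and M relaxes thermally with lifetime τ, so setting dM/dt = p(1 − M) − M/τ = 0 gives the steady-state M fraction pτ/(1 + pτ).

```python
# Toy two-state photocycle model in the spirit of the abstract: ground
# state B is pumped to state M at rate p, and M decays back thermally
# with lifetime tau. Steady state follows from dM/dt = 0.
def steady_state_M(pump_rate: float, tau: float) -> float:
    """Fraction of molecules in M at steady state: p*tau / (1 + p*tau)."""
    return pump_rate * tau / (1.0 + pump_rate * tau)

for p in (0.1, 1.0, 10.0):      # pump rate in units of 1/tau
    print(f"p*tau = {p:5.1f}  ->  M fraction = {steady_state_M(p, 1.0):.3f}")
```

    Fitting such expressions to steady-state bleach data is one way the pure-state spectra and lifetimes mentioned above can be separated.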

    Case breach fault model and diagnostic for the solid rocket

    A model of the case breach fault for solid rocket boosters (SRBs) that takes into account a burn-through hole in the propellant, insulator, and metal layers of the rocket case is developed. Melting of the metal layer and ablation of the insulator layer in the presence of hot gas flow through the hole are analyzed in detail. The dynamics of the lateral (side) thrust produced by the growing hole are calculated for typical parameters of the SRB. The problem of inferring the fault parameters from measurements of the nominal values of stagnation pressure and thrust is formulated and solved in the quasi-steady approximation. An application of the recently developed Bayesian framework for diagnostics and prognostics of the case breach fault, which can cause loss of flight control, is discussed.
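    The order of magnitude of the lateral thrust from such a hole can be estimated by treating the hole as a crude choked nozzle, F ~ C_F p0 A. All numbers below (chamber pressure, hole size, thrust coefficient) are illustrative assumptions, not parameters from the paper.

```python
# Order-of-magnitude estimate of the lateral thrust from a case breach
# hole, treating the hole as a crude choked nozzle: F ~ C_F * p0 * A.
# All values are illustrative, not parameters from the paper.
import math

p0 = 6.0e6          # chamber stagnation pressure, Pa (~60 bar, assumed)
d_hole = 0.02       # hole diameter, m (2 cm, assumed)
C_F = 1.2           # effective thrust coefficient for a rough orifice (assumed)

A = math.pi * (d_hole / 2) ** 2         # hole area, m^2
F_side = C_F * p0 * A                   # lateral thrust, N
print(f"lateral thrust ~ {F_side / 1e3:.1f} kN")
```

    Even a 2 cm hole at chamber pressure yields kilonewtons of side force, which is why the growing hole threatens flight control.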